Learning Identity-Consistent Feature for Cross-Modality Person Re-Identification via Pixel and Feature Alignment

Authors

Abstract

RGB-IR cross-modality person re-identification (ReID) can be seen as a multicamera retrieval problem that aims to match pedestrian images captured by visible and infrared cameras. Most of the existing methods focus on reducing modality differences through feature representation learning; however, they ignore the huge difference between the two modalities in pixel space. Unlike these methods, in this paper we utilize a pixel and feature alignment network (PFANet) to reduce the modality difference in pixel space while aligning features. Our model contains three components: a feature extractor, a generator, and a joint discriminator. As in previous works, the generator and the joint discriminator are used to generate high-quality images; however, we make substantial improvements to the feature extraction module. Firstly, we fuse batch normalization and global attention (BNG), which pays attention to channel information while conducting information interaction across channels and spatial positions. Secondly, to alleviate the modality difference in pixel space, we propose a modality mitigation module (MMM). Then, by jointly training the entire model, ours is able not only to mitigate intramodality variations but also to learn identity-consistent features. Finally, extensive experimental results show that our model outperforms other methods. On the SYSU-MM01 dataset, it achieves a rank-1 accuracy of 40.83% and an mAP of 39.84%.
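
To make the BNG idea concrete, below is a minimal PyTorch-style sketch of a block that fuses batch normalization with channel and spatial attention. The class name, layer choices, and reduction parameter are assumptions based only on the abstract, not the authors' implementation.

```python
# Hypothetical sketch of a batch-normalization + global-attention (BNG) block.
# Structure is inferred from the abstract; the paper's actual module may differ.
import torch
import torch.nn as nn

class BNGBlock(nn.Module):
    """Batch norm followed by global attention that mixes channel
    and spatial information, as the abstract describes."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)
        # Channel attention: squeeze spatial dims, re-weight channels.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: collapse channels, re-weight each location.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = self.bn(x)
        x = x * self.channel_att(x)   # channel-wise interaction
        x = x * self.spatial_att(x)   # spatial interaction
        return x

x = torch.randn(2, 64, 32, 16)
print(BNGBlock(64)(x).shape)  # torch.Size([2, 64, 32, 16])
```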


Similar Articles

Image alignment via kernelized feature learning

Machine learning is an application of artificial intelligence that is able to automatically learn and improve from experience without being explicitly programmed. The primary assumption for most machine learning algorithms is that the training set (source domain) and the test set (target domain) follow the same probability distribution. However, in most of the real-world application...
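
As a concrete illustration of the source/target distribution mismatch this abstract refers to, here is a small sketch computing maximum mean discrepancy (MMD) with an RBF kernel. MMD is one standard measure of that gap; nothing here is taken from the cited paper itself.

```python
# Illustrative sketch (not from the cited paper): RBF-kernel MMD between
# source-domain and target-domain feature sets.
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2), computed pairwise.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd(source, target, gamma=1.0):
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(100, 8))   # source-domain features
tgt = rng.normal(0.5, 1.0, size=(100, 8))   # shifted target-domain features
print(mmd(src, tgt))  # positive when the two distributions differ
```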


Person Re-identification via Recurrent Feature Aggregation

We address the person re-identification problem by effectively exploiting a globally discriminative feature representation from a sequence of tracked human regions/patches. This is in contrast to previous person re-id works, which rely on either single frame based person to person patch matching, or graph based sequence to sequence matching. We show that a progressive/sequential fusion framewor...
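
To make the sequence-level idea concrete, the sketch below aggregates per-frame features with a recurrent layer and temporal pooling, a generic illustration in the spirit of this abstract. The module name, dimensions, and pooling choice are assumptions, not the paper's design.

```python
# Hypothetical sketch: per-frame CNN features fused by an LSTM into a single
# sequence-level descriptor, rather than single-frame patch matching.
import torch
import torch.nn as nn

class RecurrentAggregator(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=512):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)

    def forward(self, frame_feats):
        # frame_feats: (batch, seq_len, feat_dim) per-frame features.
        outputs, _ = self.rnn(frame_feats)
        # Temporal average pooling yields one descriptor per tracklet.
        return outputs.mean(dim=1)

agg = RecurrentAggregator()
clip = torch.randn(4, 16, 2048)   # 4 tracklets, 16 frames each
print(agg(clip).shape)            # torch.Size([4, 512])
```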


Hierarchical Invariant Feature Learning with Marginalization for Person Re-Identification

This paper addresses the problem of matching pedestrians across multiple camera views, known as person re-identification. Variations in lighting conditions, environment and pose changes across camera views make re-identification a challenging problem. Previous methods address these challenges by designing specific features or by learning a distance function. We propose a hierarchical feature le...


Deep Feature Learning via Structured Graph Laplacian Embedding for Person Re-Identification

Learning the distance metric between pairs of examples is of great importance for visual recognition, especially for person re-identification (Re-Id). Recently, the contrastive and triplet loss are proposed to enhance the discriminative power of the deeply learned features, and have achieved remarkable success. As can be seen, either the contrastive or triplet loss is just one special case of t...
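
Since the abstract singles out the contrastive and triplet losses as special cases, here is a minimal sketch of the triplet loss for reference; the paper's graph Laplacian formulation generalizes beyond this particular case.

```python
# Standard triplet loss: pull same-identity pairs together and push
# different identities apart by at least `margin` in Euclidean distance.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    return F.relu(d_ap - d_an + margin).mean()

a, p, n = (torch.randn(8, 128) for _ in range(3))
print(triplet_loss(a, p, n))
```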


Supplementary Material for “RGB-Infrared Cross-Modality Person Re-Identification”

This supplementary material accompanies the paper “RGB-Infrared Cross-Modality Person Re-Identification”. It includes more details of Section 4, as well as extra evaluations of our proposed deep zero-padding method. 1. Details of Counting Domain-Specific Nodes In the third paragraph of Section 4.2 in the main manuscript, we quantify the number of domain-specific nodes in the trained network in ...
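
For context, the deep zero-padding encoding referenced here assigns each modality its own input channel and zero-fills the other, which is what lets the network develop domain-specific nodes. The sketch below illustrates that encoding; shapes and names are chosen for illustration only.

```python
# Illustrative zero-padding input encoding for RGB-IR ReID: channel 0 carries
# the (grayscale) visible image, channel 1 the infrared image, and the unused
# channel is zero-filled.
import numpy as np

def zero_pad_input(image, modality):
    # image: (H, W) single-channel array (grayscale RGB or infrared).
    h, w = image.shape
    x = np.zeros((2, h, w), dtype=image.dtype)
    if modality == "rgb":
        x[0] = image   # visible-light channel
    else:
        x[1] = image   # infrared channel
    return x

gray = np.random.rand(128, 64)
print(zero_pad_input(gray, "rgb").shape)   # (2, 128, 64)
```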



Journal

Journal title: Mobile Information Systems

Year: 2022

ISSN: 1875-905X, 1574-017X

DOI: https://doi.org/10.1155/2022/4131322